Principles2Plan: LLM-Guided System for Operationalising Ethical Principles into Plans

Zhong, Tammy, Song, Yang, Pagnucco, Maurice

arXiv.org Artificial Intelligence

Ethical awareness is critical for robots operating in human environments, yet existing automated planning tools provide little support. Manually specifying ethical rules is labour-intensive and highly context-specific. We present Principles2Plan, an interactive research prototype demonstrating how a human and a Large Language Model (LLM) can collaborate to produce context-sensitive ethical rules and guide automated planning. A domain expert provides the planning domain, problem details, and relevant high-level principles such as beneficence and privacy. The system generates operationalisable ethical rules consistent with these principles, which the user can review, prioritise, and supply to a planner to produce ethically-informed plans. To our knowledge, no prior system supports users in generating principle-grounded rules for classical planning contexts. Principles2Plan showcases the potential of human-LLM collaboration for making ethical automated planning more practical and feasible.
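The abstract describes a pipeline: an LLM proposes operationalisable rules from high-level principles, a human expert reviews and prioritises them, and the ranked rules are handed to a planner. The following is a minimal Python sketch of the review-and-prioritise step only; it is not from the paper, and all names, example rules, and the ranking mechanism are hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass
class EthicalRule:
    principle: str   # high-level principle, e.g. "beneficence" or "privacy"
    text: str        # operationalisable rule proposed by the LLM (hypothetical)
    priority: int = 0

def prioritise(rules, ordering):
    """Apply the expert's ranking: earlier position in `ordering` means
    higher priority; returns rules sorted accordingly."""
    for rule in rules:
        rule.priority = ordering.index(rule.text)
    return sorted(rules, key=lambda r: r.priority)

# Hypothetical candidate rules for a home-assistance planning domain
candidates = [
    EthicalRule("privacy", "never record audio in a private room"),
    EthicalRule("beneficence", "prefer plans that check on the occupant hourly"),
]

# The expert ranks beneficence-related checking above the privacy rule
ranked = prioritise(candidates, [
    "prefer plans that check on the occupant hourly",
    "never record audio in a private room",
])
```

In a full system, the ranked rules would then be translated into constraints or preferences for the planner; this sketch stops at producing the ordered list.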


August 3, 10, 17, 19+24: Legal Evolution: Analytics and Artificial Intelligence in the Law

#artificialintelligence

In the words of former GE CEO Jack Welch, "If you don't have a competitive advantage, don't compete." This CLE covers how technology has changed the practice of law and how we can (and should) use analytics to our transactional and litigation advantage. Examine how analytics have changed our application of the model rules of professional responsibility. Understand how analytics and artificial intelligence are applied in both the professional and legal worlds. Examine how to use and apply analytics in a legal case.


A look back at the Unesco recommendation establishing ethical rules for artificial intelligence - Actu IA

#artificialintelligence

Audrey Azoulay, Director-General of UNESCO, last week presented the first-ever global standard on the ethics of artificial intelligence, adopted by UNESCO's 193 Member States at the international organization's General Conference. UNESCO had highlighted back in November 2019 the need for regulatory frameworks at the national but also international level to ensure that innovative AI technologies can benefit all humanity. This recommendation, the result of the work of 24 international experts appointed on March 11, 2020, sets a global normative framework and gives its member states the responsibility to translate this framework at their level. Over the past decade, AI has experienced a considerable boom. Experts agree that humanity is on the threshold of a new era and that artificial intelligence will transform our lives in ways we cannot imagine.


Austria wants ethical rules on battlefield killer robots

#artificialintelligence

Vienna is embarking on a diplomatic initiative to draw up an ethical framework for the use of killer robots on the battlefields of the future. Foreign Minister Alexander Schallenberg said similar standards should be adopted as those established for landmines and cluster weapons. "We have to create rules before killer robots reach the battlefield of this Earth," Schallenberg told the German newspaper Welt am Sonntag. He said the Austrian government was planning a conference in Vienna in 2021 "to initiate a process that will hopefully lead to an international convention on the use of artificial intelligence on battlefields." Read more: Should 'killer robots' be banned?


Rethinking AI Ethics - Asimov has a lot to answer for

#artificialintelligence

From whence did this concept of AI 'Ethics' derive? Digital systems that caused great harm to people via injustice, discrimination or exclusion, privacy or just plain cheating, not to mention the environment, have been with us for decades. Ethical issues in analytics and models did not arise with Big Data, Data Science or AI -- they have been with us for a long time. Was there ever a COBOL Ethics, a DB2 Ethics, an ERP Ethics (well, maybe)? This whole fascination with AI Ethics derives from, in my opinion, Isaac Asimov's Three Laws of Robotics.


Ethical rules of the road needed for artificial intelligence Expert column

#artificialintelligence

In Australia, five major companies are involved in a trial run of eight principles developed as part of the government AI Ethics Framework. The idea behind the principles is to ensure that AI systems benefit individuals, society and the environment; respect human rights; don't discriminate; and uphold privacy rights and data protection.


Robots are Not Your Friend – Ethics and AI STRATECTA

#artificialintelligence

AI is taking over more and more tasks previously done by humans. For the most part this is a good thing; AI algorithms are very good at handling tedious, repetitive tasks that generally drive human workers to boredom, frustration… and mistakes. However, there is growing evidence that AI is not quite the solution we expected it to be. The theory is that a computer should provide an unbiased "opinion," but there is a body of evidence showing that AI algorithms actually share the biases of the people who code them. As recently as October 2019, the Washington Post reported that a leading algorithm that helps healthcare workers determine who needs extra care was dramatically favoring white patients over black patients.


Can the Pentagon's new draft rules actually keep killer robots under control? ZDNet

#artificialintelligence

Killer robots, whether they're the product of scaremongering or a real threat to the international power balance, now have their very own set of ethical rules. However, the newly published Pentagon guidelines on the military use of AI are unlikely to satisfy its critics. The draft guidelines were released late last week by the Defense Innovation Board (DIB), which the Department of Defense (DoD) had tasked in 2018 with producing a set of ethical rules for the use of AI in warfare. The DIB has spent the past 12 months studying AI ethics and principles with academics, lawyers, computer scientists, philosophers, and business leaders – all chaired by ex-Google CEO Eric Schmidt. What they came up with had to align with the DoD's AI Strategy published in 2018, which determines that AI should be used "in a lawful and ethical manner to promote our values".


Why AI needs Ethics, Transparency and Explainability to be successful? 7wData

#artificialintelligence

Big data was everywhere about 10 years ago; now the same is happening to Artificial Intelligence (AI). There is no conference we attend, no article we read, and no executive we talk to without the term Artificial Intelligence being mentioned. Every self-respecting software publisher feels the urge to mention Artificial Intelligence in one way or another. When we dig deeper, their "AI" is mostly either automated processes or machine-learning-based algorithms providing specific models for very specific and niche use cases. The real AI, as we know it from science fiction movies, is still far off.


Towards Ethical Machines Via Logic Programming

Dyoub, Abeer, Costantini, Stefania, Lisi, Francesca A.

arXiv.org Artificial Intelligence

However, the overall aim is not only important for equipping machines with capabilities of moral reasoning, but also for helping us to better understand morality through creating and testing computational models of ethical machines that follow a set of ideal ethical principles. Since the beginning of this century there have been several attempts to implement ethical decision making in intelligent autonomous agents using different approaches. But no fully descriptive and widely accepted model of moral judgment and decision-making exists. In this work we propose a hybrid logic-based approach for modeling ethical machines, particularly ethical chatbots. As a matter of fact, the potential of logic programming (LP) to model moral machines was envisioned by Pereira and Saptawijaya [15].
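The abstract's logic-programming approach encodes ethical rules declaratively, so that an action is permissible unless some rule forbids it. Below is a toy Python imitation of that declarative style; the paper itself uses logic programming, not Python, and the facts, predicates, and the data-sharing rule here are purely hypothetical illustrations.

```python
# Facts are ground atoms represented as tuples, loosely imitating LP syntax.
facts = {("consent_given", "user")}

def forbidden(action, facts):
    """Toy ethical rule: sharing personal data without the user's
    consent is forbidden (hypothetical rule, not from the paper)."""
    return (action == "share_personal_data"
            and ("consent_given", "user") not in facts)

def permissible(action, facts):
    """An action is permissible unless some rule forbids it,
    mirroring LP's negation-as-failure reading."""
    return not forbidden(action, facts)
```

With consent present, `permissible("share_personal_data", facts)` holds; withdraw the consent fact and the same query fails, which is the kind of defeasible behaviour a logic-programming encoding provides for free.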